Method for predicting cellular network performance and cellular network monitoring system
Patent abstract:
The present invention relates to a system and method for predicting cellular network performance based on observed cell performance indicator data in the cellular network. The method includes accessing a set of observed performance indicator data, the performance indicator data including a time-sequenced measure of performance indicators for the cellular network. The method then classifies a cell based on the observed performance data as one of a high-load growth cell and a high-load non-growth cell. Based on the cell classification, the method computes a future value of at least one performance indicator using a predictive model based on test data for the cell in the observed performance indicator data. The predictive model is derived from training data in the observed performance indicator data. An indication of the future value of at least one of the performance indicators is issued when the future value exceeds an alarm value.

Publication number: BR112019011414A2
Application number: R112019011414
Filing date: 2017-11-20
Publication date: 2019-10-22
Inventors: Sheen Baoling; Wang Ji; Yang Jin; Zhao Wenwen; Li Ying
Applicant: Huawei Tech Co Ltd
IPC main class:
Patent description:
Invention Patent Descriptive Report for PREDICTION OF PERFORMANCE INDICATORS IN CELLULAR NETWORKS

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of US Non-Provisional Patent Application Serial Number 15 / 373,177, filed on December 8, 2016 and entitled Prediction of Performance Indicators in Cellular Networks, which is incorporated herein by reference as if reproduced in its entirety.

TECHNICAL FIELD

[0002] The description relates to the maintenance of cellular networks by predicting problems with cell performance based on key performance and quality indicators.

BACKGROUND

[0003] The performance of a cellular network is affected by a collection of factors such as the load of data and voice traffic, RF coverage, the level of inter-cell interference, the location of users, and hardware failures. In many cases, the performance of some wireless cells within a cellular network may appear abnormal, and the mobile users who are served by these cells will suffer a poor user experience. A bad user experience gives rise to customer dissatisfaction.

[0004] Cellular network operators often need to detect abnormal behavior and then take action to resolve problems before the situation deteriorates. Operators rely on so-called Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs) to measure the performance of a cellular network. KPIs or KQIs such as access setup success rate, average cell throughput, or average throughput per user device reflect network quality and user experience. Petition 870190051907, dated 06/03/2019. These performance indicators are closely monitored by operators. Operators use these performance indicators to predict future KPIs when traffic or the number of users in a cell is growing, or before any network changes take place.

[0005] A KPI is generally a time series quantification of a specific performance factor that indicates the performance of the network.
Examples include the average downlink cell throughput across all user equipments in each cell of a cellular network, the average downlink throughput per user equipment in each cell of a cellular network, or the total transmitted bits in each cell of a cellular network. An accurate prediction of KPIs is very important in service provisioning and network planning, as well as in predicting whether the supported network capacity is meeting the demand from user equipment. If not, network managers can, for example, add new base stations to the network to resolve potential resource or capacity issues.

SUMMARY

[0006] One aspect of the description includes a processor-implemented method that predicts cellular network performance based on observed performance indicator data from cells in the cellular network. The method includes accessing a set of observed performance indicator data, the performance indicator data including a time-sequenced measure of performance indicators for the cellular network. The method then classifies a cell based on the observed performance data as one of a high-load growth cell and a high-load non-growth cell. Based on the classification of the cell, the method computes a future value of at least one performance indicator using a predictive model based on test data for the cell in the observed performance indicator data. The predictive model is derived from training data in the observed performance indicator data. An indication of the future value of at least one of the performance indicators is issued when the future value exceeds an alarm value.
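By way of a non-limiting sketch of this aspect, the classify-predict-alarm sequence might be expressed as follows. All function names, the percentile rule, and the example models are illustrative assumptions of this sketch, not taken from the claims; the 0.35 growth fraction mirrors the example G% given later in the description.

```python
def classify_cell(kpi_history, growth_fraction=0.35):
    """Classify a cell as a high-load growth or non-growth cell.

    Illustrative rule: compare the later half of the KPI observations
    against the 95th percentile of the earlier (training) half; the
    cell is a 'growth' cell when more than growth_fraction of the
    recent observations exceed that cutoff."""
    train = kpi_history[: len(kpi_history) // 2]
    test = kpi_history[len(kpi_history) // 2 :]
    cutoff = sorted(train)[int(0.95 * (len(train) - 1))]
    share = sum(1 for v in test if v > cutoff) / len(test)
    return "growth" if share > growth_fraction else "non_growth"


def predict_and_alarm(kpi_history, models, alarm_value):
    """Select a predictive model by cell class, compute the future
    KPI value, and indicate an alarm when it exceeds alarm_value."""
    label = classify_cell(kpi_history)
    future = models[label](kpi_history)
    return label, future, future > alarm_value
```

Here `models` maps each cell class to any callable trained on the corresponding training data; the set predictive model described for growth cells would take the place of the single callable.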
[0007] Another aspect of the description includes a non-transitory computer-readable medium that stores computer instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: accessing a set of observed performance indicator data, the performance indicator data including a time-sequenced measure of performance indicators for a cellular network; and computing future values of at least one performance indicator using a set predictive model based on test data for cells classified as high-load growth cells in the observed performance indicator data, the set predictive model derived from training data for at least a portion of the cells classified as high-load cells in the observed performance indicator data and including at least two predictive models for high-load growth cells. The non-transitory computer-readable medium also includes instructions that, when executed by one or more processors, cause the one or more processors to issue an indication of a future value of at least one of the performance indicators when the future value exceeds an alarm value. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.

[0008] An additional aspect includes a cellular network monitoring system. The cellular network monitoring system includes a processing system that includes at least one processor, storage attached to the processor, and a network interface.
Instructions stored in the storage are operable to instruct the at least one processor to: access a set of observed performance indicator data, the performance indicator data including a time-sequenced measure of performance indicators for the cellular network; classify the observed performance data based on the cell from which the data originated, including classifying the cell as one of a high-load growth cell and a high-load non-growth cell; compute future values of at least one performance indicator using at least one set predictive model based on test data for cells classified as high-load growth cells in the observed performance indicator data, the set predictive model derived from training data for at least a portion of the cells classified as high-load cells in the observed performance indicator data and including at least two predictive models for high-load growth cells, where each of the at least two predictive models is based on training data from a different one of: the high-load growth cells, all of the high-load cells, and the high-load non-growth cells; and issue an indication of a future value of at least one of the performance indicators when the future value exceeds an alarm value.

[0009] An additional aspect of the description includes a cellular network monitoring system comprising an access element that accesses a set of observed performance indicator data, the performance indicator data comprising performance indicators for a cellular network measured
in a time sequence; a computing element that computes future values of at least one performance indicator using a set predictive model based on test data in the observed performance indicator data set and training data for cells classified as high-load growth cells or high-load non-growth cells, the set predictive model derived from training data for at least a portion of the cells classified as high-load cells in the observed performance indicator data and comprising at least two predictive models for the high-load growth cells; and an output element that emits a future value of at least one of the performance indicators.

[0010] An additional aspect of the description includes a cellular network monitoring system comprising an access element that accesses a set of observed performance indicator data of performance indicators for the cellular network; a classification element that classifies a cell based on the observed performance data, including classifying the cell as one of a high-load growth cell or a high-load non-growth cell; a computing element that computes future values of at least one performance indicator using at least one set predictive model based on test data for cells classified as high-load growth cells in the observed performance indicator data, the set predictive model derived from training data for at least a portion of the cells classified as high-load cells in the observed performance indicator data and comprising at least two predictive models for high-load growth cells, where each of the at least two predictive models is based on training data from a different one of: the high-load growth cells, all of the high-load cells, and the high-load non-growth cells; and an output element that emits a future value of at least one of the performance indicators.
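By way of a non-limiting illustration of how such a set predictive model might combine the outputs of its component models, consider the following sketch. The inverse-error weighting is one plausible reading of the merging described in this specification, and every identifier here is an assumption of the sketch rather than language from the claims.

```python
def combine_predictions(predictions, actual):
    """Combine the outputs of several predictive models against one
    actual (validation) value of the performance indicator.

    If every model overshoots, or every model undershoots, keep the
    prediction whose error against the actual value is smallest;
    otherwise merge the predictions with weights inverse to each
    model's absolute error (an illustrative weighting)."""
    errors = [abs(p - actual) for p in predictions]
    if all(p > actual for p in predictions) or all(p < actual for p in predictions):
        return predictions[errors.index(min(errors))]
    weights = [1.0 / (e + 1e-9) for e in errors]
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)
```

With a single actual value, ranking by absolute error is equivalent to ranking by relative error, since the denominator is shared by all models.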
[0011] Optionally, in any of the above-mentioned aspects, the observed performance indicator data set comprises performance indicator data for a period of time, and the computation comprises computing the set predictive model by combining future values computed by at least two predictive models for high-load growth cells.

[0012] Optionally, in any of the aspects mentioned above, the method includes a first predictive model, a second predictive model, and a third predictive model. Each model predicts future high-load cell values for at least one performance indicator. The first predictive model is based on training data from high-load growth cells, and computing future values using the set predictive model comprises computing the first predictive model with test data from high-load growth cells. The second predictive model is based on the training data of all high-load cells, and computing using the set predictive model further comprises computing the second predictive model with the test data of high-load growth cells. The third predictive model is based on training data from the high-load non-growth cells, and computing using the set predictive model further comprises computing the third predictive model with high-load growth cell test data.

[0013] Optionally, in any of the aspects mentioned above, the performance indicator data set further includes validation data, and the method further comprises determining the
future values for issuance by the method by: comparing the predicted future values of the at least one performance indicator calculated by each of the first predictive model, the second predictive model, and the third predictive model against at least one actual value of the at least one performance indicator in the validation data; and, if all the predicted future values are higher, or all are lower, than the at least one actual value, selecting the future value predicted by the predictive model that has the least relative error for the validation data; or, if not all the predicted future values are higher or lower than the at least one actual value, merging the predicted future values of each of the first predictive model, the second predictive model, and the third predictive model into a set of predicted values by weighting an error of each model relative to the at least one actual value of the performance indicator.

[0014] This Summary is provided to introduce a selection of concepts in a simplified form, which are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

[0015] Figure 1 presents functional and structural components of an exemplary system in which the present system and method can be implemented, including a block diagram of a processing device suitable for implementing the system and method.

[0016] Figure 2 is a flowchart that illustrates a method for predicting key performance indicators based on cell classification in a cellular network according to the present description.

[0017] Figure 3 is a flowchart that illustrates a first proposal to predict KPIs for a first classification of cells in the cellular network.
[0018] Figure 4 is a flowchart that illustrates a second proposal to predict KPIs for a second classification of cells in the cellular network.

[0019] Figure 5 is a flowchart that illustrates a third, set proposal to predict KPIs for a third classification of cells in the cellular network.

[0020] Figure 6 is a flowchart that illustrates a step of Figure 5 of choosing the best prediction model.

DETAILED DESCRIPTION

[0021] A system and method are described for accurately predicting performance indicators of cells in a cellular network and providing an alarm or warning issued to alert the operator to take precautions before further degradation of network performance or user experience occurs. The description, in one aspect, uses multiple proposals and different training data to create predictive models based on cell classifications, and predicts future values of performance indicators using the different proposals to improve prediction accuracy. In one proposal, multiple predictive models are used to predict future values for the same test data from a cell classification. In this joint proposal, the best predicted future values are selected, or the values are merged, to integrate the strengths of the multiple algorithms together and improve the accuracy of the prediction.

[0022] The description here will be discussed by means of examples with respect to key performance indicators (KPIs). It will be recognized that the techniques here can also be applied to the prediction of other performance indicators of a cellular network, such as key quality indicators (KQIs), indicators of quality of service (QoS), and indicators of quality of experience (QoE).

[0023] The description uses multiple proposals and different training data for the cells to create different performance indicator prediction proposals.
The performance indicator data observed from the network is used as training, testing, and in some cases validation data. The cells in the observed data are classified based on load into medium-load and high-load cells, with the high-load cells further classified into growth and non-growth cells. In a joint proposal, more than one algorithm for the prediction of future performance indicators is created, and the prediction results are intelligently selected or merged based on the results of the multiple algorithms.

[0024] An accurate prediction of KPIs is very important for network operators, as it allows them to predict whether the supported network capacity can meet the performance requirements expected by network users, and to determine whether any corrective operations need to happen. This description describes a solution to precisely predict KPIs of network elements using multiple prediction methods. Traditionally, network performance indicators (KPI/KQI) are estimated based on engineering experience by trend analysis, for example, by predicting KPI values as the mean values of historical data, or with a certain defined delta. This is time consuming and usually not portable to other markets with different traffic models or scenarios.

[0025] Using multiple proposals under different conditions for cells can improve the prediction accuracy for each individual condition relative to the traditional prediction proposal, which uses a single proposal to predict KPIs for all cell conditions. Joint learning uses the strength of multiple algorithms together to improve the accuracy of the prediction relative to a single-algorithm proposal.

[0026] Figure 1 presents functional and structural components of a modality of a network monitoring system 195 which performs network analysis. Figure 1 includes a network 100 which is the subject network to be monitored using the system.
Although only one network is illustrated, multiple networks can be monitored, each having its own database built on the basis of historical network data from that network and engineering data.

[0027] Network 100 can comprise any wired or wireless network that provides communication connectivity for devices. Network 100 may include various cellular network and packet data network components such as a base transceiver station (BTS), a Node B, an evolved Node B (eNodeB), a base station controller (BSC), a radio network controller (RNC), a serving GPRS support node (SGSN), a gateway GPRS support node (GGSN), a WAP gateway, a mobile switching center (MSC), a short message service center (SMSC), home location registers (HLR), visitor location registers (VLR), an Internet protocol multimedia subsystem (IMS), and/or the like. Network 100 can employ any of the known and available communication protocols, such as Code Division Multiple Access (CDMA), Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), Long Term Evolution (LTE), or any other network protocol that facilitates communication between communication network 100 and network-enabled devices. Communication network 100 may also be compatible with future mobile communication standards, including, but not limited to, LTE-Advanced and WiMAX-Advanced. Network 100 may include other types of devices and nodes to receive and transmit voice, data, and combination information to and from radio transceivers, networks, the Internet, and other content delivery networks. The network can support communication from any portable or non-portable communication device that has a network connectivity function, such as a cell phone, a computer, or a tablet, which can operatively connect to the communication network 100.
[0028] Key Performance Indicators (KPIs) are internal indicators based on time-referenced network counters. Such KPIs are evaluated in the context of other counters and related to KQIs. Each KPI can be a time-referenced measure of the indicator. Variations in each KPI can be tracked over time. Network KPIs can be measured and monitored using standard interfaces defined on the wireless network. These KPIs include multiple network performance counters and timers. For example, in a mobile data service network, service accessibility can be determined through the Packet Data Protocol (PDP) Context Activation Success Rate KPI, which can be an aggregate ratio of successful PDP context activations to PDP context activation attempts. This KPI indicates the mobile subscriber's ability to access the packet-switched service.

[0029] Several exemplary KPIs referenced here include: PS.Service.Downlink.Average.Throughput - the average downlink service traffic throughput at the cell level; L.Traffic.ActiveUser.DL.QCI.Total - the number of activated user devices from QCI 1 to QCI 9 in the downlink buffer at the cell level; L.Thrp.bits.DL - the total downlink traffic volume for packet data convergence protocol (PDCP) service data units (SDUs) in a cell; and L.Thrp.bits.UL - the total uplink traffic volume for PDCP protocol data units (PDUs) in a cell.

[0030] As discussed below, these KPIs can be used in the description here as training, testing, or validation data in order to predict future trends for such KPIs in a network analysis, which can then alert network administrators to potential problems with one or more cells, allowing the network administrator to resolve such problems before they affect a user's experience.

[0031] Returning to Figure 1, a network monitoring system 195 can include a processing device 102. Figure 1 shows a block diagram of a processing device 102 suitable for implementing the system and method.
The processing device 102 may include, for example, a processor 110, a random access memory (RAM) 120, a non-volatile storage 130, a display unit (output device) 150, an input device 160, and a network interface device 140. In certain embodiments, processing device 102 may be incorporated into a personal computer, mobile computer, mobile phone, tablet, or other suitable processing device.

[0032] Illustrated in the non-volatile storage 130 are functional components which can be implemented by instructions operable to make the processor 110 implement one or more of the processes described below. Although illustrated as part of non-volatile storage 130, such instructions can be operable to cause the processor to perform the various processes described herein using any one or more of the hardware components illustrated in Figure 1. These functional components include a network monitor 132, a cell classifier 134, and a KPI analyzer 135. Also shown in non-volatile storage 130 is a database 190 which stores network data 115 gathered from the cellular network 100.

[0033] Non-volatile storage 130 may comprise any combination of one or more computer-readable media. The readable media can be a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: a computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an appropriate optical fiber with a repeater, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the context of this document, a computer-readable storage medium can be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

[0034] Processing device 102 may include a set of instructions that can be executed to cause computer system 102 to perform any one or more of the computer-based methods or functions described herein. Computer program code for performing operations for aspects of the present description can be written in any combination of one or more programming languages, including object-oriented programming languages and conventional procedural programming languages. The program code can be run entirely on computer system 102, partially on computer system 102, as a standalone software package, partially on computer system 102 and partially on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer can be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (for example, over the Internet using an Internet service provider), or in a cloud computing environment, or offered as a service.

[0035] As illustrated in Figure 1, processing system 102 includes a processor 110. A processor 110 for processing device 102 is configured to execute software instructions to perform functions as described in the various embodiments herein. A processor 110 for a processing device 102 may be a general purpose processor or may be part of an application specific integrated circuit (ASIC). A processor 110 for a processing device 102 can also be a microprocessor, a microcomputer, a processor chip, a controller, a microcontroller, a digital signal processor (DSP), a state machine, or a programmable logic device.
A processor 110 for a processing device 102 may also be a logic circuit, including a programmable gate array (PGA) such as a field-programmable gate array (FPGA), or another type of circuit that includes discrete gate and/or transistor logic. A processor 110 for a processing device 102 can be a central processing unit (CPU), a graphics processing unit (GPU), or both. In addition, any processor described here may include multiple processors, parallel processors, or both. Multiple processors can be included in, or coupled with, a single device or multiple devices.

[0036] Furthermore, the processing device 102 includes a RAM 120 and a non-volatile storage 130 that can communicate with each other, and with processor 110, over a bus 108.

[0037] As shown, the processing device 102 may further include a display unit (output device) 150, such as a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a flat panel display, a solid state display, or a cathode ray tube (CRT). In addition, the processing device may include an input device 160, such as a keyboard/virtual keyboard, a touch-sensitive input screen, or speech input with speech recognition, and which may include a cursor control device, such as a mouse, touch-sensitive input screen, or keyboard.

[0038] A network monitor 132 is a component which can query or otherwise allow the accumulation of network data 115 at the command of an administrator or operator of the network monitoring system 195. Information can be accumulated periodically, intermittently, or whenever a new KPI analysis of a cellular network 100 is desired.

[0039] The functions of the cell classifier 134 and the KPI analyzer 135 are discussed below.

[0040] Figure 2 is a flowchart illustrating a method according to the present description.
Although the method is described with respect to KPIs, it will be recognized that the method can also be used for other forms of performance indicator data, such as KQIs, for a cellular or other network. The method in Figure 2 illustrates an overall analysis method that generates an output - such as an alarm for a network administrator - to allow proactive management of a cellular network. The alarm can be generated when a prediction of a future KPI value exceeds an alarm threshold value. The method in Figure 2 can be repeated at intervals selected by the network administrator, or by a service provider that operates the network monitoring system 195 as a service for one or more cellular network providers or administrators. In any case, a set of KPI data over a period of time is retrieved from the cellular network for which the analysis is performed, and the analysis of Figure 2 is performed to provide a future prediction of KPI values which can alert the administrator to potential network problems. The method can then be repeated as needed.

[0041] In 210, KPI data over a defined period is accumulated. The KPI data in step 210 can be the observed network data 115 illustrated in Figure 1. The network data 115 can be accumulated over a suitable period of time to provide sufficient data to perform the calculations and predictions described here. An example of an adequate time period is a two-month period. The observed network data 115 can be subdivided, as described here, into training data, test data, and validation data.
For example, in a data set of observed network data 115 comprising two months of observed KPI data, four (4) weeks of data can be used to train the algorithms discussed here (training data), two (2) weeks of observed data can be used to predict future KPIs (test data), and two (2) weeks of data (validation data) can be used to determine errors in the KPI predictions.

[0042] After the observed KPI data is retrieved in 210, the description classifies the data based on the cell from which the data originated. As used herein, in one context, the term cell may refer to the observed data in the observed data 115 associated with a specific cell in the cellular network 100.

[0043] Steps 215, 220, 225, 230, 245 and 255 are shown grouped as classifier steps 134a, and represent functions which can be performed by the classifier 134. In 215, cell data with an extremely light load are filtered out. For extremely light traffic load cells, there is generally no need to predict KPIs, since such cells are not in growth mode or in danger of adversely affecting a user's experience. Such cells generally have not met resource or capacity restrictions. In one embodiment, a very low traffic load limit is first defined using an identified traffic indicator (KPI), and the limit is used to determine the extremely light traffic load cells in 215. An example of determining an extremely light traffic cell is to use a limit for a cell where more than 10% of cell observations have L.Traffic.ActiveUser.DL.QCI.Total (the average number of active downlink users across all QCIs) less than 0.1. This limit can usually be determined based on engineering knowledge, or adjusted by the administrator or operator of the network monitoring system 195.

[0044] The remaining cell data after filtering step 215 belongs to non-light traffic load cells.
Such cells experience what can be characterized as a medium to heavy traffic condition, and will be used to predict future KPI changes in cells likely to experience growth or resource constraints and thus adversely affect the user experience. As discussed below, the data from these remaining cells is further classified into medium traffic load cells and heavy traffic load cells. Heavy traffic load cell data is used to train prediction models, as such cells are more likely to experience performance problems. Cells with relatively lower traffic load (medium traffic load cells) are not used to train prediction models, since their performance is likely to be acceptable given the current analysis. Later, trained models built using heavy traffic load cells are used to predict the KPIs of the relatively lower traffic load cells.

[0045] In step 220, a first determination is made as to whether or not a cell meets a criterion for building a predictive model. In 220, the criterion for model building may be those cells which have a heavy traffic load (as opposed to medium) during the busy periods of the cellular network 100. The cells which are heavy traffic load cells are those which can be characterized as having a KPI above a certain limit, for example 85% of capacity of a KPI indicative of measurable traffic, during peak usage periods. In 220, those cells that do not meet the heavy load criterion for model building will be those cells with a relatively low traffic load (medium traffic) compared to heavy traffic load cells, as well as cells that may be deployed on the network 100 after the deployment of the heavy traffic load cells. At 245, those cells that do not meet the heavy load criterion are referred to as A2 cells. Such cells are less likely to see faster traffic growth compared to cells with heavy traffic loads.
[0046] High traffic cells will be classified as A1 cells, and further classified into growth and non-growth cells in steps 225, 230 and 255. In 225, a determination is made as to whether high traffic A1 cells are likely to see traffic growth. If not, at 230, the A1 cells are further classified as A1, Phase 1 (A1P1) cells, as cells unlikely to grow. If a cell is likely to see growth, then at 255 the cell is classified as an A1, Phase 2 (A1P2) cell.

[0047] At 225, conditions for determining traffic growth (at the cell level) include whether more than G% (where G, in one example, is 35%) of test observations are out of the characteristic range. In this case, out of the characteristic range means that at least one KPI characteristic value in the observed data is greater than, for example, the 95th percentile value of a training data set. For example, the average of the L.Thrp.bits.DL test observations is greater than the 95th percentile value of the L.Thrp.bits.DL training observations.

[0048] Depending on the classification, one of three different predictive proposals is used to predict KPIs based on the observed network data for the cell.

[0049] Steps 235, 250, and 260 represent functions which can be performed by the KPI analyzer 135 in Figure 1. For cells classified A2 (medium traffic or later deployed), in 250, a prediction proposal referred to here as Proposal 2 is used. For A1P1 cells (heavy traffic, without growth), in 235, a prediction proposal referred to here as Proposal 1 is used. For A1P2 cells (heavy traffic, growth), in 260, a prediction proposal referred to here as Proposal 3, sometimes referred to here as the joint proposal, is used.

[0050] After each of the proposals - Proposal 1, Proposal 2, or Proposal 3 - is used, an output from the prediction analysis can be provided at 240. Each of the respective proposals - Proposal 1, Proposal 2, or Proposal 3 - is discussed in more detail below.
Each of the respective proposals can be executed by the KPI analyzer 135 in Figure 1. The output can take the form of a display on a display output device 150, or any other suitable output form such as an electronic report, which can be provided to the cellular network operator. The output can be, for example, a full report of the future values of the performance indicators, such as the KPIs and KQIs considered by the predictive system here, or an alarm for a specific KPI or KQI when one or more of the predicted future values of a performance indicator exceed an alarm threshold. Each of the calculations in the respective proposals discussed here can alternatively be referred to as a predictive model or a predictive calculation, the result of which is a future value of a performance indicator such as a KPI or KQI. [0051] Figure 3 is a flowchart that illustrates Proposal 1, discussed above in relation to step 235 of Figure 2, for predicting future values of performance indicators. At 310, the training data for all cells classified A1 (heavy traffic load) are retrieved. As noted above, the training data will comprise a subset of the observed data in a data set 115 for the network 100 over a specific period of time. In one example, the time period is 2 months. In such an example, the training data can, for example, comprise the first month of observed data for the cellular network 100. At 315, a predictive model comprising a global-level regression model is trained based on the training data in the observed data 115 from step 310. As is known, global-level regression analysis can be used to characterize how the value of some dependent variable changes as one or more independent variables are varied. In one embodiment, the regression algorithm used to predict the identified network KPI dependent variable is a Generalized Additive Model (GAM).
In one embodiment, the dependent variable used is PS.Service.Downlink.Average.Throughput (the average downlink service throughput at the cell level), and the independent variables are L.Traffic.ActiveUser.DL.QCI.Total (the number of activated user devices from QCI (QoS Class Identifier) 1 to QCI 9 in the downlink buffer in a cell), L.Thrp.bits.DL (the volume of total downlink traffic for PDCP SDUs in a cell), and L.Thrp.bits.UL (the volume of total uplink traffic for PDCP PDUs in a cell). [0052] Next, at 320, the training error parameters for each cell are calculated. The training error can be calculated by any goodness-of-fit value (or coefficient of determination) such as R2, or by a prediction error measure such as the root mean square error (RMSE), the percentage of mean absolute deviation (PMAD), or the mean absolute percentage error (MAPE). [0053] At 325, all A1 cells are grouped into K groups based on their training error parameters. For example, k-means clustering partitions the observations (in this case, the training error parameters) into k groups, with each observation belonging to the group with the nearest mean, which serves as a prototype of the group. [0054] At 330, K predictive group-level regression models are trained, one for each of the K groups determined in step 325. These group-level regression models are used in both Proposal 1 and Proposal 2, as discussed below. [0055] At 335, for each A1P1 cell, test data for the A1P1 cell are used as input to the model of the cluster to which the cell belongs, to predict KPIs for the cell. At 340, for each A1P1 cell, the model used for the cell is evaluated against validation data from the original observed data set. [0056] Although A1P1 cells already exhibit a high traffic load, an output of the Proposal 1 analysis may indicate potential future problems with A1P1 cells going forward. As noted, a report on these predicted future KPIs is issued at 240 in Figure 2.
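The grouping at step 325 can be illustrated with a minimal one-dimensional k-means (Lloyd's algorithm) over per-cell training errors. The error values and the min/max center initialization are invented for illustration; in practice, each resulting group would then receive its own group-level regression model at step 330:

```python
# Sketch of step 325: 1-D k-means over per-cell training errors, so that
# a separate cluster-level regression model (step 330) can be trained per
# group. Lloyd's algorithm; the initial centers and error values are
# invented for illustration.

def kmeans_1d(values, centers, iters=20):
    """Cluster scalar training errors around the given centers; returns
    (centers, assignment list of cluster indices)."""
    assign = [0] * len(values)
    for _ in range(iters):
        # assignment step: nearest center
        assign = [min(range(len(centers)),
                      key=lambda j: abs(v - centers[j])) for v in values]
        # update step: move each center to the mean of its members
        for j in range(len(centers)):
            members = [v for v, a in zip(values, assign) if a == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, assign

# Per-cell PMAD-like training errors for six A1 cells (illustrative):
errors = [0.05, 0.06, 0.07, 0.30, 0.32, 0.31]
centers, assign = kmeans_1d(errors, centers=[min(errors), max(errors)])
# Cells 0-2 fall in the low-error group and cells 3-5 in the high-error
# group; one regression model would then be trained per group.
```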
[0057] Figure 4 is a flowchart that illustrates a method for executing Proposal 2, discussed above in relation to Figure 2, to predict future values of performance indicators. At 410, data for all A2 cells (medium traffic) in the observed data 115 are retrieved. The K group-level predictive models created in Proposal 1 in Figure 3 are retrieved at 415. At 420, for each A2 cell, and at 425, for each of the K models, the A2 cell data is used as input to the model to calculate a KPI prediction at 435 for that cell. [0058] At 440, the error of each of the K models is determined. Again, the error can be a goodness-of-fit value (or coefficient of determination) such as R2, or a prediction error measure such as the root mean square error (RMSE), the percentage of mean absolute deviation (PMAD), or the mean absolute percentage error (MAPE) for each of the K regression models. [0059] At 445, if a next model is present for the A2 cell under consideration (at 420), the method returns to step 425 and repeats steps 430 to 440 until all K models have been computed for the cell. If no additional model is to be considered at 445, then a determination is made at 450 as to whether or not a best model can be chosen. [0060] At 450, if the prediction errors for all K regression models are greater than a predefined limit, then it is determined that no suitable model can be selected for the cell, and a warning is issued at 455. The prediction results for the cell will be flagged with a warning indicator at 455. The predefined limit can be defined based on the measure used for the prediction error. In one embodiment, where the prediction error is calculated by PMAD, a limit of 30% can be used.
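The threshold test and best-model selection of steps 440 to 460 can be sketched as follows. The PMAD definition shown follows its usual form (total absolute deviation as a fraction of the total actual values); the model identifiers and KPI values are hypothetical:

```python
# Sketch of the model-selection logic of Proposal 2 (steps 440-460):
# each of the K cluster models is scored by PMAD on the A2 cell's
# observations, and the best model is kept unless every model exceeds
# a predefined limit (30% in the embodiment described above).

def pmad(actual, predicted):
    """Percentage of mean absolute deviation (assumed standard form)."""
    abs_err = sum(abs(a - p) for a, p in zip(actual, predicted))
    return abs_err / sum(abs(a) for a in actual)

def select_best_model(actual, predictions_by_model, limit=0.30):
    """predictions_by_model: {model_id: [predicted KPI values]}.
    Returns (best_model_id, error), or (None, None) when no model is
    acceptable (step 455: the cell is flagged with a warning)."""
    errs = {m: pmad(actual, p) for m, p in predictions_by_model.items()}
    best = min(errs, key=errs.get)
    if errs[best] > limit:
        return None, None
    return best, errs[best]

actual = [100.0, 110.0, 120.0]
preds = {"k1": [101.0, 108.0, 119.0],   # close fit
         "k2": [160.0, 170.0, 180.0]}   # poor fit
# select_best_model(actual, preds) -> ('k1', ...)
```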
If at 450 one or more prediction errors are below the threshold, then, for each cell, the regression model with the lowest prediction error (using the PMAD example) on the training observations is selected as the best model at 460. The threshold can be configurable by the cellular network provider or the network monitoring system administrator. [0061] If another cell is to be computed at 465, the method returns to step 420 to repeat the calculations (425 to 460) for the next cell. Once all cells are complete at 465, then, using the best cluster-level model for each cell, the KPI is predicted for each such A2 cell at 470 based on the test data available in the observed data. [0062] At 475, optionally, the prediction performance for the A2 cells can be evaluated using the validation data in the observed data for the same A2 cells. [0063] As noted above, Proposal 3 uses three different algorithms to perform KPI prediction for A1P2 cells (high load, growth). [0064] Figure 5 illustrates an embodiment of the present description for executing Proposal 3, which is used to predict future values of
KPIs for high load cells where traffic is likely to increase over time. In Proposal 3, in one embodiment, multiple algorithms are used for prediction, and the algorithms are integrated together to form an ensemble of multiple algorithms. This provides a robust multiple-algorithm prediction and improves the accuracy of KPI predictions for cellular network operators. [0065] At 510, observed network performance data 115 for the A1 cells is accumulated. Again, the network data 115 includes training data, test data, and validation data. [0066] At 515, an autoregressive predictive model is developed based on the training data for the A1P2 cells. The autoregressive model is sometimes referred to here as Algorithm 1. An autoregressive algorithm can be implemented by any number of autoregressive techniques, including, for example, a vector autoregressive model, an autoregressive moving average model, and other autoregressive models. At 520, future KPI values for each of the A1P2 cells are predicted based on the trained models, using A1P2 test data as input to the autoregressive model. [0067] Table 1 is an example of the prediction performance of Algorithm 1 in terms of prediction error, calculated on a real-world data set from the observed data 115 for a cellular network:

TABLE 1

             R2        RMSE      MAPE      PMAD
Algorithm 1  0.537818  1525.751  0.149063  0.142817

[0068] As illustrated in Table 1, in one embodiment, Algorithm 1 provides sufficient accuracy in predicting A1P2 cell KPIs to enable corrective action by a network operator. [0069] At 525, K predictive cluster models (Algorithm 2) are trained with training data for all A1 cells, in a manner like that described above with respect to Figures 3 and 4. One difference here is that data from all A1 cells (both A1P1 and A1P2 cells) are used in training the K cluster models in step 525. [0070] At 530, these cluster models are used to predict future KPI values for the A1P2 cells based on A1P2 test data as input to the models. [0071] At 535, Algorithm 3 starts in a similar way to step 525, except that the K cluster predictive models are trained with only A1P1 cell data. At 540, future KPI values for the A1P2 cells are predicted using the K cluster models trained at step 535 on the A1P1 cell data, with A1P2 test data as input to the models. [0072] At 545, the best prediction is selected. A method for selecting the best prediction is described in relation to Figure 6. Optionally, at 550, the prediction analysis can be validated using the validation data for each of the respective algorithms.
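The autoregressive step of Algorithm 1 can be illustrated with a minimal AR(1) least-squares fit. As noted above, real embodiments may instead use vector autoregressive or ARMA models; the KPI history below is invented:

```python
# Sketch of Algorithm 1 (step 515): an autoregressive model fitted to a
# cell's KPI time series. A minimal AR(1) fit by least squares is shown.

def fit_ar1(series):
    """Least-squares fit of x[t] = a + b * x[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

def forecast(a, b, last, steps):
    """Roll the fitted recurrence forward to predict future KPI values."""
    out = []
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

# KPI history growing roughly 5 units per interval (illustrative):
history = [100.0, 105.0, 110.0, 115.0, 120.0]
a, b = fit_ar1(history)
future = forecast(a, b, history[-1], steps=2)  # ≈ [125.0, 130.0]
```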
[0073] Figure 6 illustrates an ensemble method of determining the best prediction. For each A1P2 cell, at 610, a comparison of the KPI prediction results of Algorithms 1 to 3 against the respective validation data for the observed KPI input data of each algorithm is made at 615. At 620, the prediction error for each algorithm is calculated against the cross-validation data. In step 620, there may be more than one validation observation for the same time interval in the validation data set. In one embodiment, at 620 the average prediction value (across multiple prediction results, if applicable) for each method is calculated, and the average actual value (across multiple observations, if applicable) in the validation data set is calculated. These average values are then compared to determine the prediction error at 620. If, however, there is only one observation in the KPI validation data set (for the same interval), then computing average values at this stage is not necessary, and the single predicted KPI and the observed KPI are compared. [0074] From the prediction and cross-validation errors of steps 615 and 620, two cases result: [0075] At 625, if, after the cross-validation at 620, all of the total predictions of the three algorithms are higher than the total actual KPI value(s) for the observation (from the cross-validation data), or if all of the total KPI predictions of the three algorithms are lower than the actual KPI value(s), then at 630 the best result is chosen as the prediction result of the algorithm with the smallest cross-validation error (in terms of, for example, PMAD as calculated in step 620), which serves as the ensemble prediction result, and the method returns to 610 to predict the next A1P2 cell.
[0076] Returning to step 625, if not all of the predictions of the three algorithms are higher or lower than the actual KPI value, then, per step 635, either two of the three total prediction results of the algorithms are higher than the total actual KPI value of the observed data while one algorithm's total prediction is lower, or one algorithm's total prediction is higher while the other two algorithms' total predictions are lower. If this is the case, then the results of the three algorithms are merged at 640. [0077] In this context, evaluating the total predictions includes computing whether the single or average prediction result for the identified KPI/KQI is higher or lower than the single or average actual values for all three methods. If so, the method moves to step 630 to select the result with the lowest prediction error rate to predict future values for the identified KPI/KQI. If not (meaning that at least one method does not agree), the method moves to steps 635 and 640, calculating the weights for all three methods using the formula below and generating the final prediction results accordingly. [0078] At 640, the results of the three algorithms are merged into an ensemble predictive value, which can be calculated according to the following formulas:

Pred_ensemble = Weight_1 × Predict_1 + Weight_2 × Predict_2 + Weight_3 × Predict_3

where, for i = 1, 2, 3:

Weight_i = log(1 / PMAD_i) / (log(1 / PMAD_1) + log(1 / PMAD_2) + log(1 / PMAD_3))

In the above fusion formula, a relatively higher weight is given to the result of the model determined to have the lowest prediction error rate, and a relatively lower weight is given to the result of the model determined to have the highest prediction error rate. [0079] It should be recognized that not all three algorithms (Algorithms 1 to 3) need to be used in all embodiments. In one embodiment, only one of the three algorithms is used. In another embodiment, any combination of two of the three algorithms is used.
In another embodiment, all three algorithms are used, and in a further embodiment, more than three algorithms are used. [0080] Each of the calculations in the respective algorithms discussed here can alternatively be referred to as a predictive model or a predictive calculation, since such algorithms represent the calculations that result in a future value of a performance indicator such as a Key Performance Indicator or Key Quality Indicator. These future values represent the prediction results which are merged in Proposal 3. [0081] Table 2 below illustrates an example of the prediction performance of the ensemble of Algorithm 1 and Algorithm 2, using the same sample data from a cellular network as illustrated in Table 1. Table 3 shows an example of the prediction performance of the ensemble of all three algorithms. [0082] Comparing Tables 2 and 3, together with Table 1 above, the ensemble of two algorithms proves to have better prediction performance than Algorithm 1 used alone, and the ensemble of three algorithms further improves the prediction performance on top of the ensemble of two algorithms.

Table 2: An example of the prediction performance of the ensemble of Algorithm 1 and Algorithm 2

                          R2        RMSE      MAPE       PMAD
Algorithm 1               0.537818  1525.751  0.149063   0.142817
Algorithm 2               0.523446  1549.291  0.1777752  0.160132
Ensemble of 2 Algorithms  0.597328  1424.140  0.137205   0.1331301
Average Improvement       12.6%     7.4%      15.4%      13.1%

Table 3: An example of the prediction performance of the ensemble of all 3 algorithms

                          R2        RMSE      MAPE       PMAD
Algorithm 1               0.537818  1525.751  0.149063   0.142817
Algorithm 2               0.523446  1549.291  0.1777752  0.160132
Algorithm 3               0.569067  1473.269  0.166928   0.152629
Ensemble of 3 Algorithms  0.636483  1353.129  0.1335384  0.128654
Average Improvement       17.2%     10.7%     17.7%      15.1%

[0083] Using the ensemble of three algorithms, the average improvement in R2 is 17.2% and the improvement in the percentage of mean absolute deviation (PMAD) is 15.1%.
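The fusion at step 640 can be sketched as below, under the assumption, consistent with the text, that each algorithm's weight grows as its PMAD shrinks; here the weights are taken proportional to log(1/PMAD) and normalized. The prediction values and PMAD figures are illustrative:

```python
import math

# Sketch of the fusion at step 640: a weighted combination of the three
# algorithms' predictions, where lower-PMAD algorithms receive higher
# weight. The log(1/PMAD) weighting is an assumed reading of the formula.

def ensemble(predictions, pmads):
    """Weighted fusion of per-algorithm predictions."""
    raw = [math.log(1.0 / e) for e in pmads]   # larger for smaller PMAD
    total = sum(raw)
    weights = [r / total for r in raw]         # normalize to sum to 1
    return sum(w * p for w, p in zip(weights, predictions))

preds = [1000.0, 1100.0, 1200.0]
errs = [0.14, 0.16, 0.15]          # PMAD of Algorithms 1-3
fused = ensemble(preds, errs)
# Algorithm 1 (lowest PMAD) receives the largest weight, so the fused
# value lies below the plain average of 1100.
```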
[0084] It will be recognized that numerous alternative forms of output can be provided. Alternatively, a user interface can provide an alert for one or more quality or performance indicators experiencing an anomaly, with the interface providing a facility to present additional information about the root cause (or a list of potential root causes ordered by confidence). [0085] The memories described here are tangible storage media that can store data and executable instructions, and are non-transitory during the time that instructions are stored therein. A memory described here is an article of manufacture and/or a machine component. The memories as described here are computer-readable media from which data and executable instructions can be read by a computer. The memories as described here can be random access memory (RAM), read-only memory (ROM), flash memory, electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, tape, compact disk read-only memory (CD-ROM), digital versatile disk (DVD), floppy disk, Blu-ray disk, or any other form of storage medium known in the art. Memories can be volatile or non-volatile, secure and/or encrypted, non-secure and/or unencrypted. [0086] Aspects of the present description are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the description. It will be understood that each block of the flowchart and/or block diagram illustrations, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
These computer program instructions can be provided to a processor of a general purpose computer, a special purpose computer, or another programmable data processing apparatus to produce a machine, so that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. [0087] These computer program instructions can also be stored in a computer-readable medium that can direct a computer, another programmable data processing apparatus, or other devices to function in a specific manner, so that the instructions, when stored in the computer-readable medium, produce an article of manufacture including instructions which, when executed, cause a computer to implement the function/act specified in the flowchart and/or block diagram block or blocks. Such a computer-readable medium expressly excludes signals. The computer program instructions can also be loaded onto a computer, another programmable instruction execution apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, so that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0088] The subject matter here advantageously provides a processor-implemented method of: accessing a set of observed performance indicator data, the performance indicator data comprising a time-sequenced measure of performance indicators for the cellular network; classifying the observed performance data based on the cell from which the data originated, the classification including classifying the cell as one of a high load growth cell and a high load non-growth cell; based on the classification, computing a future value of at least one performance indicator using a predictive model based on test data for the cell in the observed performance indicator data, the predictive model derived from training data in the observed performance indicator data; and issuing an indication of the future value of the at least one performance indicator when the future value exceeds an alarm value. [0089] In one aspect, the description includes a method of operating a cellular network in which the network operator observes network data over a period of time and executes the processor-implemented method to determine problems with the network. The network operator then takes corrective action on the network, for example, adding new base stations to the network to resolve potential problems. [0090] In one aspect, the description includes a method of providing information to the operator of a cellular network by a service provider. The method includes receiving, observing, or otherwise acquiring network data over a period of time, and includes executing the processor-implemented method to determine problems with the network. The information can then be provided to the network operator so that the operator can take corrective action on the network, for example, adding new base stations to the network to resolve potential problems.
[0091] According to the description, multiple proposals are designed for the prediction of network performance indicators, and different load-based conditions for the cells are defined so as to best use the strengths of the different proposals to specifically target each cell condition. In an ensemble proposal, more than one predictive model is trained based on different training data for the prediction of high load growth cells, and the results of the predictions are integrated to form an ensemble of multiple algorithms. This improves the prediction power compared to algorithms that either use only past values or use other independent variables to predict network performance indicators. [0092] The description provides an improvement in the efficiency with which cellular networks can be operated, thereby providing an improved service for network clients, since network operators can predict future network problems and resolve them before network failure or further performance degradation. [0093] The description described here can further be implemented in an automated method of operating a cellular network. A network or system administrator can monitor any of the key performance or quality indicators described here and accumulate performance data for the network. The administrator can then apply the methods described here, or use the cellular network monitoring system, to generate a report that predicts future values of the key quality and performance indicators, and, based on the report, the administrator can take proactive actions on the network in order to keep the network in peak operating condition. [0094] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above.
Rather, the specific features and acts described above are disclosed as exemplary forms of implementing the claims.
Claims (20) [1] 1. Method to predict cellular network performance based on performance indicator data observed from cells in the cellular network, characterized by the fact that it comprises: accessing a set of observed performance indicator data, the observed performance indicator data comprising performance indicators for the cellular network measured in a time sequence; classifying a cell based on the observed performance data, the classification including classifying the cell as a high load growth cell or a high load non-growth cell; based on the classification, computing a future value of at least one of the performance indicators using a predictive model based on test data for the cell, wherein the predictive model is derived from training data in the observed performance indicator data; and issuing the future value of the at least one of the performance indicators. [2] 2. Method according to claim 1, characterized by the fact that the observed performance indicator data set comprises performance indicator data for a period of time, and the computing comprises computing an ensemble predictive model combining future values computed by at least two predictive models for high load growth cells. [3] 3. Method according to claim 2, characterized by the fact that the at least two predictive models comprise a first predictive model and a second predictive model, and each of the first predictive model and the second predictive model is based on training data from a different one of: high load growth cells, all high load cells, and high load non-growth cells. [4] 4.
Method according to claim 2, characterized by the fact that the at least two predictive models comprise a first predictive model, a second predictive model, and a third predictive model, each model predicting future values of the at least one performance indicator for high load cells, and wherein: the first predictive model is based on training data of high load growth cells, and computing the future values using the ensemble predictive model comprises computing the first predictive model with test data of high load growth cells; the second predictive model is based on training data of all high load cells, and computing using the ensemble predictive model further comprises computing the second predictive model with test data of high load growth cells; and the third predictive model is based on training data of all high load non-growth cells, and computing using the ensemble predictive model further comprises computing the third predictive model with test data of high load growth cells. [5] 5. Method according to any one of claims 3 and 4, characterized by the fact that the first predictive model comprises an autoregressive model. [6] 6. Method according to any one of claims 3 to 5, characterized by the fact that the second and third predictive models comprise k cluster models. [7] 7.
Method according to any one of claims 1 to 6, characterized by the fact that the performance indicator data set further includes validation data, and the method further comprises determining the future values for said issuing by: comparing predicted future values of the at least one performance indicator calculated by each of the first predictive model, the second predictive model, and the third predictive model to at least one actual value of the at least one performance indicator in the validation data; and if all the predicted future values are higher or lower than the at least one actual value, selecting a future value predicted by the predictive model that has the smallest relative error with respect to the validation data; or if not all the predicted future values are higher or lower than the at least one actual value, then merging the predicted future values of each of the first predictive model, the second predictive model, and the third predictive model into an ensemble predictive value by weighting an error of each model against the at least one actual value of the performance indicator. [8] 8. Method according to claim 7, characterized by the fact that the at least one actual value of the at least one performance indicator comprises a plurality of actual values of the at least one performance indicator, and wherein the predicted future values comprise a plurality of future values of the at least one performance indicator for each of the first predictive model, the second predictive model, and the third predictive model, and the comparing comprises averaging the plurality of actual values and averaging each plurality of future values predicted by the first predictive model, the second predictive model, and the third predictive model, and said comparing comprises comparing the average value of the plurality of actual values of the at least one performance indicator with the average value for each of the first predictive model, the second predictive model, and the third predictive model. [9] 9.
Method according to any one of claims 7 and 8, characterized by the fact that said merging comprises calculating the ensemble predictive value by assigning a relatively higher weight to a future value predicted by the predictive model determined to have the lowest prediction error rate and a relatively lower weight to a future value predicted by the predictive model determined to have the highest prediction error rate. [10] 10. Method according to any one of claims 1 to 9, characterized by the fact that the classification further includes classifying cells in the data into medium load cells, and computing a future value of the at least one performance indicator includes using a predictive model based on training data of high load growth cells in the observed performance indicator data, the predictive model computed using test data of medium load cells. [11] 11. Cellular network monitoring system, characterized by the fact that it comprises instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of: accessing a set of observed performance indicator data, the performance indicator data comprising performance indicators for a cellular network measured in a time sequence; computing future values of at least one performance indicator using an ensemble predictive model based on test data in the set of observed performance indicator data for cells classified as high load growth cells or high load non-growth cells, the ensemble predictive model derived from training data for at least a portion of cells classified as high load cells in the observed performance indicator data and comprising at least two predictive models for high load growth cells; and issuing a future value of the at least one of the performance indicators. [12] 12.
Cellular network monitoring system according to claim 11, characterized by the fact that the at least two predictive models comprise a first predictive model, a second predictive model, and a third predictive model, each model predicting future values of the at least one performance indicator for a high load growth cell, and wherein: the first predictive model is based on training data of high load growth cells, and computing the future values using the ensemble predictive model comprises computing the first predictive model with test data of high load growth cells; the second predictive model is based on training data of all high load cells, and computing the future values using the ensemble predictive model further comprises computing the second predictive model with test data of high load growth cells; and the third predictive model is based on training data of high load non-growth cells, and computing the future values using the ensemble predictive model further comprises computing the third predictive model with test data of high load growth cells. [13] 13.
Cellular network monitoring system according to claim 12, characterized by the fact that the performance indicator data set further includes validation data, and the instructions, when executed by the one or more processors, cause the one or more processors to determine the future values for said issuing by: comparing predicted future values of the at least one performance indicator calculated by each of the first predictive model, the second predictive model, and the third predictive model to at least one actual value of the at least one performance indicator in the validation data; and if all the predicted future values are higher or lower than the at least one actual value, selecting a predicted future value from the predictive model that has the smallest relative error with respect to the validation data; or if not all the predicted future values are higher or lower than the at least one actual value, then merging the predicted future values of each of the first predictive model, the second predictive model, and the third predictive model into a single ensemble predictive value by weighting an error of each model relative to the at least one actual value of the performance indicator. [14] 14. Cellular network monitoring system according to any one of claims 12 and 13, characterized by the fact that the first predictive model comprises an autoregressive model trained using training data based on high load growth cells in the observed performance indicator data set; wherein the second predictive model comprises a k cluster model trained using training data based on all high load cells in the observed performance indicator data set; and wherein the third predictive model comprises a k cluster model trained using training data based on high load non-growth cells in the observed performance indicator data set. [15] 15.
Cellular network monitoring system according to any one of claims 13 and 14, characterized by the fact that the at least one real value of the at least one performance indicator comprises a plurality of real values of each of the at least one performance indicator, and wherein the predicted future values comprise a plurality of future values of the at least one performance indicator for each of the first predictive model, the second predictive model, and the third predictive model, and the comparison comprises calculating the average of the plurality of real values and calculating the average of each of the plurality of future values predicted by the first predictive model, the second predictive model, and the third predictive model, and said comparison comprises comparing said average values. [16] 16. Cellular network monitoring system according to any one of claims 13-15, characterized by the fact that said merging comprises calculating the set of predicted values by assigning a relatively higher weight to a predicted value result of the model that was determined to have the lowest prediction error rate and assigning a relatively lower weight to a predicted value result of the model that was determined to have the highest prediction error rate. [17] 17. Cellular network monitoring system, characterized by comprising: a non-transitory memory store comprising instructions; and one or more processors in communication with the memory, wherein the one or more processors execute the instructions to: access a set of observed performance indicator data of performance indicators for the cellular network; classify a cell based on observed performance data, including classifying the cell as one of a high load cell or a non-high load cell; compute future values of at least one performance indicator using at least one ensemble predictive model based on test data for cells classified as high load growth cells based on the observed performance indicator data, the ensemble predictive model derived from training data for at least a portion of the cells classified as high load cells in the observed performance indicator data and comprising at least two predictive models for high load growth cells, wherein each of the at least two predictive models is based on training data from a different one of high load growth cells, all high load cells, and high load non-growth cells; and issue a future value of at least one of the performance indicators. [18] 18. Cellular network monitoring system according to claim 17, characterized by the fact that the one or more processors execute the instructions to compute a future value from one of the first predictive model and the second predictive model based on a computed error rate of each said predictive model in relation to validation data for the performance indicator in the observed performance indicator data set. [19] 19.
Cellular network monitoring system according to any one of claims 17 and 18, characterized by the fact that it further includes a third predictive model, wherein the one or more processors execute the instructions to: compute future values using the ensemble predictive model by using the first predictive model with test data from high load growth cells, wherein the first predictive model is based on training data from high load growth cells; compute future values using the ensemble predictive model by using the second predictive model with test data from high load growth cells, wherein the second predictive model is based on training data from all high load cells; and compute future values using the ensemble model by using the third predictive model, wherein the third predictive model is based on training data from high load non-growth cells and wherein the instructions stored in storage are operable to instruct the at least one processor to compute future values using the third predictive model with test data from high load growth cells. [20] 20. Cellular network monitoring system according to claim 19, characterized by the fact that the one or more processors execute the instructions to select a future value from one of the first predictive model, the second predictive model, and the third predictive model based on a computed error rate of each said predictive model in relation to validation data for the performance indicator in the observed performance indicator data set, by: comparing the predicted future values of the at least one performance indicator calculated by each of the first predictive model, the second predictive model, and the third predictive model to at least one real value of the at least one performance indicator in the validation data; and if all the predicted future values are higher or lower than the at least one real value, selecting a predicted future value from the predictive model that has the smallest relative error with respect to the validation data; or if not all the predicted future values are higher or lower than the at least one real value, merging the predicted future values of each of the first predictive model, the second predictive model, and the third predictive model into a set of predicted values, assigning a relatively higher weight to a predicted value result of the model that was determined to have the lowest prediction error rate and a relatively lower weight to a predicted value result of the model that was determined to have the highest prediction error rate.
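The select-or-merge ensemble logic recited in claims 13, 15, 16 and 20 can be sketched as follows. This is an illustrative sketch only: the function name `ensemble_predict`, the dictionary data layout, and the inverse-error weighting formula are assumptions not taken from the patent, which requires only that the lowest-error model receive the highest weight when the forecasts are merged.

```python
import statistics

def ensemble_predict(predictions, actuals):
    """Combine per-model KPI forecasts along the lines of claims 13, 15, 16 and 20.

    predictions: dict mapping a model name to its list of predicted KPI values
    actuals:     list of real KPI values from the validation data
    Returns a single list of ensemble KPI predictions.
    """
    # Claim 15: the comparison is made on averages of predicted and real values.
    actual_mean = statistics.mean(actuals)
    pred_means = {m: statistics.mean(p) for m, p in predictions.items()}
    rel_error = {m: abs(pm - actual_mean) / abs(actual_mean)
                 for m, pm in pred_means.items()}

    # Claim 13: if every model over-predicts (or every model under-predicts)
    # the validation average, keep the model with the smallest relative error.
    if all(pm > actual_mean for pm in pred_means.values()) or \
       all(pm < actual_mean for pm in pred_means.values()):
        best = min(rel_error, key=rel_error.get)
        return predictions[best]

    # Claims 16 and 20: otherwise merge the forecasts, weighting each model
    # inversely to its error so the lowest-error model gets the highest weight.
    inverse = {m: 1.0 / (e + 1e-12) for m, e in rel_error.items()}
    total = sum(inverse.values())
    weights = {m: w / total for m, w in inverse.items()}
    horizon = len(next(iter(predictions.values())))
    return [sum(weights[m] * predictions[m][t] for m in predictions)
            for t in range(horizon)]
```

For example, if the three models' forecasts all sit above the validation average, the function returns the single forecast with the smallest relative error; if the forecasts straddle the real average, it returns an error-weighted blend of all three.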
Similar technologies:
Publication number | Publication date | Patent title
BR112019011414A2|2019-10-22|Method for predicting cellular network performance and cellular network monitoring system
Gao et al. 2020|Machine learning based workload prediction in cloud computing
US10887783B2|2021-01-05|Wireless network site survey systems and methods
US7243049B1|2007-07-10|Method for modeling system performance
JP2017194468A|2017-10-26|Storage battery evaluation device, power storage system, storage battery evaluation method, and computer program
CN108513251A|2018-09-07|A kind of localization method and system based on MR data
US7649853B1|2010-01-19|Method for keeping and searching network history for corrective and preventive measures
US11128548B2|2021-09-21|Network element health status detection method and device
US20190280950A1|2019-09-12|End-to-end IT service performance monitoring
US9651654B2|2017-05-16|Correcting device error radius estimates in positioning systems
US20180349135A1|2018-12-06|Software development project system and method
US11122467B2|2021-09-14|Service aware load imbalance detection and root cause identification
US20130282331A1|2013-10-24|Detecting abnormal behavior
EP3968159A1|2022-03-16|Performance monitoring in a distributed storage system
US20210377811A1|2021-12-02|Service aware coverage degradation detection and root cause identification
CN104320271B|2017-11-21|A kind of network equipment safety evaluation method and device
WO2018037549A1|2018-03-01|Analysis server and analysis program
WO2015086070A1|2015-06-18|Technique for counting objects in a telecommunications network
CN107124727B|2020-05-12|PCI optimization method and device
Ingram et al. 2012|Using early stage project data to predict change-proneness
Martinez-Julia et al. 2020|Explained intelligent management decisions in virtual networks and network slices
US20120109707A1|2012-05-03|Providing a status indication for a project
Fakhfakh et al. 2013|QoS aggregation for service orchestrations based on workflow pattern rules and MCDM method: evaluation at design time and runtime
US11089436B2|2021-08-10|Method and determining unit for identifying optimal location
US20210341561A1|2021-11-04|Mobile device three-dimensional location service
Family patents:
Publication number | Publication date
CN109983798B|2021-02-23|
WO2018103524A1|2018-06-14|
CN109983798A|2019-07-05|
EP3539316A4|2019-09-18|
EP3539316A1|2019-09-18|
EP3539316B1|2021-04-14|
US9900790B1|2018-02-20|
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
US7984153B2|2009-06-24|2011-07-19|Verizon Patent And Licensing Inc.|System and method for analyzing domino impact of network growth
US9485667B2|2010-08-11|2016-11-01|Verizon Patent And Licensing Inc.|Qualifying locations for fixed wireless services
US9031561B2|2011-11-17|2015-05-12|Cisco Technology, Inc.|Method and system for optimizing cellular networks operation
CN103178990A|2011-12-20|2013-06-26|China Mobile Group Qinghai Co., Ltd.|Network device performance monitoring method and network management system
EP2750432A1|2012-12-28|2014-07-02|Telefónica, S.A.|Method and system for predicting the channel usage
US9439081B1|2013-02-04|2016-09-06|Further LLC|Systems and methods for network performance forecasting
US9204319B2|2014-04-08|2015-12-01|Cellco Partnership|Estimating long term evolution network capacity and performance
CN105636056B|2014-11-14|2021-04-02|ZTE Corporation|Energy-saving method, device and system for optimizing spectrum resources
US10911318B2|2015-03-24|2021-02-02|Futurewei Technologies, Inc.|Future network condition predictor for network time series data utilizing a hidden Markov model for non-anomalous data and a Gaussian mixture model for anomalous data
EP3433953B1|2016-03-21|2020-12-02|Telefonaktiebolaget LM Ericsson|Target carrier radio predictions using source carrier measurements
US10977574B2|2017-02-14|2021-04-13|Cisco Technology, Inc.|Prediction of network device control plane instabilities
DE102017206631A1|2017-04-20|2018-10-25|Audi Ag|Method for detecting and determining a probability of failure of a radio network and central computer
US10405219B2|2017-11-21|2019-09-03|At&T Intellectual Property I, L.P.|Network reconfiguration using genetic algorithm-based predictive models
US10740656B2|2018-09-19|2020-08-11|Hughes Network Systems, LLC|Machine learning clustering models for determining the condition of a communication system
WO2020121084A1|2018-12-11|2020-06-18|Telefonaktiebolaget LM Ericsson|System and method for improving machine learning model performance in a communications network
EP3895467A1|2018-12-11|2021-10-20|Telefonaktiebolaget LM Ericsson|Method and system to predict network performance of a fixed wireless network
KR20200086464A|2019-01-09|2020-07-17|Samsung Electronics Co., Ltd.|A method and apparatus for forecasting saturation of cell capacity in a wireless communication system
US10834610B2|2019-02-11|2020-11-10|T-Mobile USA, Inc.|Managing LTE network capacity
WO2020180424A1|2019-03-04|2020-09-10|Iocurrents, Inc.|Data compression and communication using machine learning
FR3095100B1|2019-04-15|2021-09-03|Continental Automotive|Signal and/or service quality prediction method and associated device
US10708122B1|2019-10-30|2020-07-07|T-Mobile USA, Inc.|Network fault detection and quality of service improvement systems and methods
US11190425B2|2019-12-30|2021-11-30|Viavi Solutions Inc.|Anomaly detection in a network based on a key performance indicator prediction model
US20210321301A1|2020-04-09|2021-10-14|Dish Wireless L.L.C.|Cellular network capacity slicing systems and methods
US11271797B2|2020-07-09|2022-03-08|Telefonaktiebolaget LM Ericsson|Cell accessibility prediction and actuation
Legal status:
2021-10-13| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application number | Publication number | Priority date | Filing date | Patent title
US15/373,177|US9900790B1|2016-12-08|2016-12-08|Prediction of performance indicators in cellular networks
PCT/CN2017/111915|WO2018103524A1|2016-12-08|2017-11-20|Prediction of performance indicators in cellular networks